Convergence Analysis of the Approximate Proximal Splitting Method for Non-Smooth Convex Optimization

Author

  • Mojtaba Kadkhodaie Elyaderani
Abstract

Consider a class of convex minimization problems in which the objective function is the sum of a smooth convex function and a non-smooth convex regularization term. This class covers several popular applications, such as compressive sensing and sparse group LASSO. In this thesis, we introduce a general class of approximate proximal splitting (APS) methods for solving such minimization problems. The APS class includes many well-known algorithms, such as the proximal splitting method (PSM), the block coordinate descent (BCD) method, and approximate gradient projection methods for smooth convex optimization. We establish the linear convergence of APS methods under a local error bound assumption. Since the latter is known to hold for compressive sensing and sparse group LASSO problems, our analysis implies the linear convergence of the BCD method for these problems without a strong convexity assumption.
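To make the template concrete: the problems in question have the form min_x g(x) + h(x), with g smooth and h non-smooth but proximable, and the basic (exact) proximal splitting step is x_{k+1} = prox_{step·h}(x_k - step·∇g(x_k)). The Python sketch below runs this exact iteration on a small LASSO instance; it is illustrative only: the function names are ours, not the thesis's, and the APS class analyzed there further allows the gradient and proximal computations to be approximate.

    import numpy as np

    def soft_threshold(v, t):
        # prox of t * ||.||_1 (soft-thresholding), applied componentwise
        return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

    def proximal_gradient(grad_g, prox_h, x0, step, n_iter=500):
        # Exact proximal splitting iteration:
        #   x_{k+1} = prox_{step*h}(x_k - step * grad_g(x_k))
        x = x0.copy()
        for _ in range(n_iter):
            x = prox_h(x - step * grad_g(x), step)
        return x

    # LASSO instance: min_x 0.5*||Ax - b||^2 + lam*||x||_1
    rng = np.random.default_rng(0)
    A = rng.standard_normal((40, 100))
    b = rng.standard_normal(40)
    lam = 0.1
    step = 1.0 / np.linalg.norm(A, 2) ** 2  # 1/L, where L = ||A||_2^2 is the gradient's Lipschitz constant
    x_hat = proximal_gradient(lambda x: A.T @ (A @ x - b),
                              lambda v, s: soft_threshold(v, s * lam),
                              np.zeros(100), step)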


Similar articles

A Multi-step Inertial Forward-Backward Splitting Method for Non-convex Optimization

We propose a multi-step inertial Forward–Backward splitting algorithm for minimizing the sum of two functions, neither necessarily convex, one of which is proper and lower semi-continuous while the other is differentiable with a Lipschitz continuous gradient. We first prove global convergence of the algorithm with the help of the Kurdyka–Łojasiewicz property. Then, when the non-smooth part is also partl...
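Schematically, with F the smooth part and R the non-smooth part, an s-step inertial Forward–Backward iteration takes the form (our paraphrase; the inertial coefficients a_{i,k} and step sizes γ_k are scheme-specific):

\[
y_k = x_k + \sum_{i=1}^{s} a_{i,k}\,(x_{k-i+1} - x_{k-i}), \qquad
x_{k+1} \in \operatorname{prox}_{\gamma_k R}\!\bigl(y_k - \gamma_k \nabla F(y_k)\bigr).
\]

With s = 1 this reduces to the familiar single-step (heavy-ball-style) inertial Forward–Backward method; the inclusion accounts for the proximal map being possibly set-valued when R is non-convex.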


On some steplength approaches for proximal algorithms

We discuss a number of novel steplength selection schemes for proximal-based convex optimization algorithms. In particular, we consider the problem where the Lipschitz constant of the gradient of the smooth part of the objective function is unknown. We generalize two optimization algorithms of Khobotov type and prove convergence. We also take into account possible inaccurate computation of the ...
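A standard remedy when that Lipschitz constant is unknown (a simpler device than the Khobotov-type schemes the paper generalizes) is backtracking: try a step and shrink it until a local quadratic upper bound holds. A minimal sketch, assuming g and grad_g operate on NumPy-style arrays:

    def backtracking_gradient_step(g, grad_g, x, step0=1.0, beta=0.5):
        # Shrink the trial step until the descent-lemma surrogate holds at
        # y = x - step * grad_g(x):
        #   g(y) <= g(x) + <grad_g(x), y - x> + ||y - x||^2 / (2 * step)
        # If grad_g is L-Lipschitz, this loop stops once step <= 1/L.
        gx, dx = g(x), grad_g(x)
        step = step0
        while True:
            y = x - step * dx
            if g(y) <= gx + dx @ (y - x) + (y - x) @ (y - x) / (2.0 * step):
                return step, y
            step *= beta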


Convergence rate of inexact proximal point methods with relative error criteria for convex optimization

In this paper, we consider a framework of inexact proximal point methods for convex optimization that allows a relative error tolerance in the approximate solution of each proximal subproblem and establish its convergence rate. We then show that the well-known forward-backward splitting algorithm for convex optimization belongs to this framework. Finally, we propose and establish the iteration-...
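The exact proximal step would satisfy 0 ∈ λ_k ∂f(x_{k+1}) + x_{k+1} - x_k. A relative error criterion in the spirit of this framework accepts an approximate x_{k+1} once, for some v_{k+1} ∈ ∂f(x_{k+1}),

\[
\|\lambda_k v_{k+1} + x_{k+1} - x_k\| \le \sigma\,\|x_{k+1} - x_k\|, \qquad 0 \le \sigma < 1.
\]

This is a schematic Solodov–Svaiter-type condition and may differ in detail from the paper's exact tolerance; the point is that the tolerance scales with the progress of the iteration rather than requiring a summable sequence of absolute errors.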


Convergence analysis of the Peaceman-Rachford splitting method for nonsmooth convex optimization

In this paper, we focus on the convergence analysis for the application of the Peaceman–Rachford splitting method to a convex minimization model whose objective function is the sum of smooth and nonsmooth convex functions. The sublinear convergence rate, in terms of the worst-case O(1/t) iteration complexity, is established if the gradient of the smooth objective function is assumed to be Lipschi...
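Writing R_{γf} = 2 prox_{γf} - I for the reflected proximal map, the Peaceman–Rachford iteration for min_x f(x) + g(x) is, schematically,

\[
z_{k+1} = R_{\gamma f}\bigl(R_{\gamma g}(z_k)\bigr), \qquad x_{k+1} = \operatorname{prox}_{\gamma g}(z_{k+1}).
\]

Unlike Douglas–Rachford, which averages this composition of reflections with the identity, Peaceman–Rachford applies it in full, which is why extra assumptions such as the Lipschitz gradient above are needed for convergence guarantees.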


Non-smooth Non-convex Bregman Minimization: Unification and new Algorithms

We propose a unifying algorithm for non-smooth non-convex optimization. The algorithm approximates the objective function by a convex model function and finds an approximate (Bregman) proximal point of the convex model. This approximate minimizer of the model function yields a descent direction, along which the next iterate is found. Complemented with an Armijo-like line search strategy, we obt...
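Schematically, with D_φ(x, y) = φ(x) - φ(y) - ⟨∇φ(y), x - y⟩ the Bregman distance induced by a suitable function φ, one step of this scheme reads (our paraphrase of the construction described above):

\[
\tilde{x}_k \in \arg\min_{x}\; m_{x_k}(x) + \tfrac{1}{\gamma}\,D_\phi(x, x_k), \qquad
x_{k+1} = x_k + \tau_k\,(\tilde{x}_k - x_k),
\]

where m_{x_k} is the convex model of the objective around x_k and the step size τ_k comes from the Armijo-like line search.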




Publication date: 2014